Towards Neuronal Deep Fakes: Data Driven Optimization of Reduced Neuronal Models

author: Russell Jarvis, PhD Neuroscience. ICON Laboratory. Co-advisors: Prof Richard Gerkin, Prof Sharon Crook. Committee: Yi Zhou, James Abbas. date: 5 November 2020

Allen Cell Types Experimental Data

Slicing into Error Hyper-Volume

It is hard to get insight into what the genetic algorithm “sees”. One approach is to construct 3D error surfaces from 2D parameter pairs. In 10 dimensions there are 45 unique pairs.
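The number of unique 2D slices through an n-dimensional parameter space is the binomial coefficient C(n, 2); a quick check for the 10-parameter case:

```python
from itertools import combinations

n_params = 10  # dimensionality of the reduced-model parameter space
pairs = list(combinations(range(n_params), 2))  # unique unordered parameter pairs
print(len(pairs))  # → 45 unique 2D slices in 10 dimensions
```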

Slicing into Error Hyper-Volume 2


You can zoom in and take a look at some of the 2D parameter pairs.

Virtual Experiments

voltage clamp

These plots also show you something else. As I said previously, the GA samples sparsely and incompletely. This means the surfaces appear irregular, because polygon positions are centered semi-randomly. You can see that the surface is corrugated from left to right in the top pane and up and down in the bottom pane.


Current Voltage Breakdown

Experiment and fitted model both fire 94 times. For the Izhikevich model to achieve fits on spike times, voltage base and spike amplitude were traded off.

Introduction

Need to improve medicine and artificial intelligence. Need better electrical models. Animal experiments are limited. Brain simulations would be better if models were faster and more accurate.

Virtual Experiments

What are they, and why? Neuronal models are virtual experiments.

Introduction

Identify the counterfeit

  • Negative results are important.
  • Fitting to the mean is a bad idea: I show that fitting to the mean measurement may often work, but it is based on flawed methodological assumptions.
  • Above-threshold spiking fits: ‘spot the fake’ part 2.
  • Preferred current versus fixed current search.
  • Optimization is still possible despite Rastrigin’s function.
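The Rastrigin function referred to above is a standard multimodal benchmark: a globally convex envelope riddled with local minima, with one global minimum at the origin. A minimal sketch:

```python
import numpy as np

def rastrigin(x, A=10.0):
    """Rastrigin benchmark: f(x) = A*n + sum(x_i^2 - A*cos(2*pi*x_i))."""
    x = np.asarray(x, dtype=float)
    return A * x.size + np.sum(x**2 - A * np.cos(2 * np.pi * x))

print(rastrigin(np.zeros(10)))  # → 0.0, the global minimum at the origin
```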

Work

What is Required for Successful Optimization? For optimization to both succeed and be useful, several criteria must be met (Van Geit et al., 2007): relevance, speed, smoothness.

Relevance: The objective function should reflect fundamental and important properties of the data that a good model would reproduce.

Speed: The objective function should be fast to calculate, since typically a large number (potentially millions) of evaluations are performed during the search, many of which may require re-simulation of the model.

Smoothness (efficient convergence): The solution space should be as continuous and convex as possible, so that the search algorithm can rapidly converge to a global optimum.

Bolstered E. Marder’s Claims

Marder considered conductance-based models of stomatogastric ganglion cells in the lobster; we consider two classes of reduced models across broad categories of experimental cells.

Simulation as an Experimental Platform: The Need for Speed

  • Mean model, not mean measurement.
  • Above-threshold spiking fits: ‘spot the fake’ part 2.
  • Preferred current versus fixed current search.
  • Optimization is still possible despite Rastrigin’s function.

Art and Science

Identify the Features (ingredients) that will add up to good recipes

Science

Show how some ingredients lead to bad recipes.

Action Potential Amplitudes

Action Potential height as it varies along the spike train.

After-hyperpolarization potential.
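Features like these can be pulled out of a voltage trace with ordinary peak detection. A hedged sketch using a synthetic trace and `scipy.signal.find_peaks` (this illustrates the idea only; it is not the Allen feature-extraction pipeline used in the work):

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic spike train: -65 mV baseline with three spikes of decreasing height
t = np.linspace(0, 0.3, 3000)
vm = -65 + sum(a * np.exp(-((t - c) / 0.001) ** 2)
               for a, c in [(100, 0.05), (95, 0.15), (90, 0.25)])

peaks, _ = find_peaks(vm, height=0)   # spikes cross 0 mV
amplitudes = vm[peaks] - (-65)        # AP height relative to baseline
print(amplitudes)                     # heights decrease along the train
```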

Models and Data are Readily Distinguishable in a Reduced Dimension Space

48 out of 240 features.

Models and Data are Readily Distinguishable in a Reduced Dimension Space

Experiments from different brain regions are distinguishable.

Experiments and models are distinguishable.

Models from different brain regions cluster together.


Alternatives to the Eve Marder Dilemma

Fitting to experimental means can work, but it is not reliable: it depends on assumptions about the data’s covariance. Alternative: fit to the whole trace of a single experiment.

Models and Data are Readily Distinguishable in a Reduced Dimension Space

2nd Eigenvector

| Feature Name | Feature Description | Extraction Library | Stimulus Strength |
|---|---|---|---|
| fast-trough-index | Index into array where the beginning of the trough occurs | Allen | 1.5× Rheobase |
| peak-index | Index into array where voltage peak(s) occur | Allen | 1.5× Rheobase |
| upstroke-index | Index into array where the first upward phase of the AP is detected | Allen | 1.5× Rheobase |
| threshold-index | Index into array where threshold(s) are surpassed | Allen | 1.5× Rheobase |
| fast-trough-time | The time at which the trough commences | Allen | 1.5× Rheobase |
| fast-trough-index | Index into array where the start of the trough is entered | Allen | 3.0× Rheobase |
| peak-index | Index into array where voltage peak(s) occur | Allen | 3.0× Rheobase |
| upstroke-index | Index into array where the first upward phase of a spike commences | Allen | 3.0× Rheobase |
| threshold-index | Index into array where threshold(s) are surpassed | Allen | 3.0× Rheobase |

How Genetic Algorithm Works

An engine that drives the whole work. The true solution is inaccessible in a \(10^{n}\) search space. Won’t get trapped in local minima.
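In outline, the search loop looks like this. A toy elitist GA sketch with tournament-free truncation selection and Gaussian mutation (the actual work used NSGA-II via DEAP; the function and parameter names here are illustrative only, and the toy applies no bound clipping after initialization):

```python
import random

def evolve(error, bounds, pop_size=20, gens=100, seed=0):
    """Minimal elitist genetic algorithm: keep the better half, mutate it."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=error)
        elite = scored[: pop_size // 2]                      # truncation selection
        children = [[g + rng.gauss(0, 0.1) for g in p]       # Gaussian mutation
                    for p in elite]
        pop = elite + children                               # elitism: best survive
    return min(pop, key=error)

# Minimize a simple quadratic error over a 3D parameter space
best = evolve(lambda x: sum(g * g for g in x), [(-5, 5)] * 3)
print(best)  # close to the origin
```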

Identify the counterfeit

Identify the counterfeit

Identify the counterfeit 2

Identify the counterfeit

Spot the fake part 3.

High density firing without adaptation

IZHIkevich_fit_60Adexp_80.jpg

Mean model not equal to model mean

\((a-b)/2\)

skewed_distribution.png

reproduced_izhi.png
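The point of this slide can be shown in a few lines: for a nonlinear model response, the response of the mean parameter set is not the mean of the responses, and a skewed parameter distribution makes the gap worse. A toy numpy sketch (the log-normal sample and squaring function are illustrative stand-ins, not the actual model):

```python
import numpy as np

rng = np.random.default_rng(0)
# Skewed "parameter" sample (log-normal), as seen in real electrophysiology data
params = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

def f(p):
    return p ** 2  # stand-in for a nonlinear model response

mean_of_responses = f(params).mean()  # average behaviour across many models
response_of_mean = f(params.mean())   # behaviour of the single "mean model"
print(mean_of_responses, response_of_mean)  # they disagree substantially
```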

What is a Feature

Visualization of Eigen Vector Loadings
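Loadings can be read directly off the principal axes. A sketch with plain numpy SVD on a synthetic feature matrix (illustrative only; the thesis analysis used the real 48-feature dataset, not this toy):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))   # 200 "cells" x 5 "features" (synthetic)
X[:, 0] += 3 * X[:, 1]          # make two features strongly correlated

Xc = X - X.mean(axis=0)         # center before PCA
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
loadings = vt[0]                # weight of each feature on the first eigenvector
print(np.round(loadings, 2))   # features 0 and 1 dominate PC1
```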

Data-Driven Optimization Can Be Fragile: Optimization Needs Special Conditions

Possible Causes of Failure:

  • 1. Models are not flexible enough to recapitulate important variance in the data.
  • 2. Data is reliable but misrepresented (1 & 3 are sound).
  • 3. Error surfaces lack learnable information (1 & 2 are sound).

Controls: 1 and 3 can be controlled for by simulating data and checking the learnability of error surfaces; 2 cannot be directly controlled, but the data can be interrogated. In my work I found evidence for all three types of failure.

Hardening Optimization, by Controlling for failure.

Possible Causes of Failure:

  • 1. Models are not flexible enough to recapitulate important variance in the data.
  • 2. Data is spurious (1 & 3 are sound).
  • 3. Error surfaces lack learnable information (1 & 2 are sound).

Controls: 1 and 3 can be controlled for by simulating data and checking the learnability of error surfaces; 2 cannot be directly controlled, but the data can be interrogated.

The Need for Speed

To get a picture of what was going wrong I had to run many virtual experiments. To run many experiments quickly I needed faster models, so I rewrote the models using code accelerators, since there were very many different types of experiment to complete in a short amount of time.
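A sketch of the kind of accelerator-friendly rewrite involved: an Euler loop for the Izhikevich (2003) model written so a JIT compiler such as Numba can compile it. The parameter values are the standard regular-spiking set; the `@njit` decorator is left commented out so the sketch runs as plain Python:

```python
import numpy as np

# from numba import njit
# @njit  # uncomment for a large JIT speedup on long traces
def izhikevich(I, dt=0.1, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Forward-Euler integration of the Izhikevich model; returns the vm trace."""
    v, u = -65.0, -65.0 * b
    vm = np.empty(I.size)
    for i in range(I.size):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I[i])
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike detected: clamp and reset
            vm[i] = 30.0
            v, u = c, u + d
        else:
            vm[i] = v
    return vm

vm = izhikevich(np.full(10_000, 10.0))  # 1 s of constant suprathreshold drive
print(int((vm == 30.0).sum()))          # number of spikes in the trace
```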

Although Genetic Algorithms are Overall Robust

In neuronal modelling they are still fragile, in the sense that they benefit from human design, supervision, and testing.

Can exploit the global convexity of a complex surface (satisficing). Still vulnerable to poor learning environments. Still benefit from supervision and intervention.

Can fall back to random sampling, with memory of the best candidate.
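That fallback is just random search with elitist memory. A minimal sketch (names illustrative):

```python
import random

def random_search(error, bounds, n_samples=2000, seed=0):
    """Uniform random sampling that remembers the best point seen so far."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        e = error(x)
        if e < best_err:            # memory of the best candidate
            best, best_err = x, e
    return best, best_err

best, err = random_search(lambda x: sum(g * g for g in x), [(-5, 5)] * 2)
print(err)  # small; the remembered best sits near the origin
```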

Error Surface Defects

The 10% of surfaces that are practical to visualize.

12 variables

E. Marder

Showed what can go wrong when fitting models to the mean of neuron electrical data, when the mean and variance violate the assumptions of a normal distribution.

friendly_error_surface.png

Error Surface Defects

In 10 dimensions there are 45 unique pairs of dimensions in the error hypervolume.

That is a mess.

Error Surface Defects

Contributions to Modeling:

Two fast models. Auto code generation to make novel feature/data combinations.

High dimensional exploration of variance in data and models.

Contributions to Science:

  • Recipe for fitting Izhikevich and AdEx models to spike-train shape + AP times.
  • Recipe for fitting Izhikevich and AdEx models to FI curves.
  • Better understanding of model limits (shape is often incompatible with the firing frequency–current relationship), probably because the underlying representations of capacitance and resistance (a, b) are more like fudge factors than anything else.

Contributions to Science:

  • Spike shape and spike times seem to conflict.
  • More complex models don’t necessarily fit better.
  • Reduced models are not good at fitting the time constant.
  • Demonstrated reasons why fitting to the mean of neuron electrical experiment data is not a good idea: the main reason is bi-modality; the second is variance structure (skew).

Contributions to Science:

  • Fitting to the FI curve is usually possible.
  • Spike shape and spike times seem to conflict.
  • More complex models don’t necessarily fit better.
  • Reduced models are not good at fitting the time constant.
  • Demonstrated reasons why fitting to the mean of neuron electrical experiment data is not a good idea: the main reason is bi-modality; the second is covariance structure.
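Why bi-modality makes the mean mislead: the mean of a bimodal sample sits between the modes, in a low-density region where few (or no) real cells live. A numpy sketch with a synthetic two-population measurement (values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Bimodal "measurement": two cell populations with distinct typical values (mV)
sample = np.concatenate([rng.normal(-70, 1, 500), rng.normal(-50, 1, 500)])

mean = sample.mean()                      # about -60 mV: between the modes
counts, edges = np.histogram(sample, bins=40)
mean_bin = np.searchsorted(edges, mean) - 1
print(mean, counts[mean_bin], counts.max())  # the mean falls in a near-empty bin
```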

Contributions to Science:

  • Reduced models could usually fit to FI-curves of experiments.
  • Reduced models could fit some types of spike trains quite well.
  • Reduced models could be over-fitted to spike shape.

Models are not flexible enough, or they over-fit, or both

When the data is good, you could fit a model to the spike times and spike shapes in the waveforms, but only for a single current-injection value.

Table

| Specimen ID | FI Slope Gradient | TimeConstantTest | RestingPotentialTest | InputResistanceTest | RheobaseTest |
|---|---|---|---|---|---|
| 623960880 | 0.18 | 23.8 | -65.1 | 241 | 70 |
| 623893177 | 0.12 | 27.8 | -77.0 | 136 | 190 |
| 471819401 | 0.18 | 13.8 | -77.5 | 132 | 190 |
| 482493761 | 0.09 | 24.4 | -71.6 | 132 | 70 |
The End: Acknowledgements:

This body of work was a large international team effort that was only possible because of continuous attention from diverse faculty at ASU. First and foremost I would like to thank my committee: Professor Sharon Crook, Professor Rick Gerkin, Professor James Abbas, and Professor Yi Zhou. Additionally, Sharon Crook and James Abbas offered significant unexpected personal support. Lastly, Rick poured many hours of his life into consulting on this project. Thanks generally to the ASU research community.

What is publishable Now:

Mean model, not mean measurement: the problems of using the mean to optimize against.

Pre-emptive Question Slides from committee.

9 × 9 dimensions

Did it work?

To find out about the brain we can do virtual experiments.

Speed: results might violate your assumptions and might not be what you expected, provoking more experiments.

Speed 3: the most common type of result is tentative; to get results that are consistent in a system, you need prompt feedback.

Smoothness 3, and relevance 6.

What is Publishable With More Work:

References

Ball, Gareth, Paul Aljabar, Sally Zebari, Nora Tusor, Tomoki Arichi, Nazakat Merchant, Emma C Robinson, et al. 2014. “Rich-Club Organization of the Newborn Human Brain.” Proceedings of the National Academy of Sciences 111 (20): 7456–61.

Banino, Andrea, Caswell Barry, Benigno Uria, Charles Blundell, Timothy Lillicrap, Piotr Mirowski, Alexander Pritzel, et al. 2018. “Vector-Based Navigation Using Grid-Like Representations in Artificial Agents.” Nature 557 (7705): 429–33.

Billeh, Yazan N, Binghuang Cai, Sergey L Gratiy, Kael Dai, Ramakrishnan Iyer, Nathan W Gouwens, Reza Abbasi-Asl, et al. 2020. “Systematic Integration of Structural and Functional Data into Multi-Scale Models of Mouse Primary Visual Cortex.” Neuron.

Birgiolas, J, VR Haynes, RJ Jarvis, RC Gerkin, and SM Crook. 2019. “NeuroML-Db: A Model Sharing Resource That Promotes Rapid Selection and Reuse.” International Neuroinformatics Coordinating Facility Congress, Warsaw, Poland.

Birgiolas, Justas. 2019. “Towards Brains in the Cloud: A Biophysically Realistic Computational Model of Olfactory Bulb.” PhD thesis, Arizona State University.

Birgiolas, Justas, Richard Gerkin, and Sharon Crook. 2016. “Rapid Selection of NeuroML Models via NeuroML-DB.org.” NEURON 28 (10): 2063–90.

Blue-Brain-Developers. 2017. “BBP Neocortical Microcircuit Portal.” http://microcircuits.epfl.ch/#/animal/8ecde7d1-b2d2-11e4-b949-6003088da632.

Brette, Romain, and Wulfram Gerstner. 2005. “Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity.” Journal of Neurophysiology 94 (5): 3637–42.

Carnevale, Nicholas T, and Michael L Hines. 2006. The Neuron Book. Cambridge University Press.

Colquhoun, David. 1994. “Ion Channels of Excitable Cells (Methods in Neurosciences Vol. 19): Edited by Toshio Narahashi, Academic Press, 1994,£ 65.00 (Xiii+ 387 Pages) Isbn 0 12 185287 3.” Trends in Biochemical Sciences 19 (9): 389.

Davison, Andrew. 2020a. “AdExp Neuron with deltaT=0 Doesn’t Produce Spikes with Brian Backend.” Issue 370, NeuralEnsemble/PyNN. GitHub. https://github.com/NeuralEnsemble/PyNN/issues/370.

———. 2020b. “PyNN.neuron Implementation of Adexp Is Unstable, Gives Poor Results Issue 266 Neuralensemble/Pynn.” GitHub. https://github.com/NeuralEnsemble/PyNN/issues/266.

Davison, Andrew P, Daniel Brüderle, Jochen M Eppler, Jens Kremkow, Eilif Muller, Dejan Pecevski, Laurent Perrinet, and Pierre Yger. 2009. “PyNN: A Common Interface for Neuronal Network Simulators.” Frontiers in Neuroinformatics 2: 11.

Deb, Kalyanmoy, Samir Agrawal, Amrit Pratap, and Tanaka Meyarivan. 2000. “A Fast Elitist Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-Ii.” In International Conference on Parallel Problem Solving from Nature, 849–58. Springer.

Denker, M., A. Yegenoglu, and S. Grün. 2018. “Collaborative HPC-Enabled Workflows on the HBP Collaboratory Using the Elephant Framework.” https://doi.org/10.12751/incf.ni2018.0019.

Di Luca, M., and D. Nutt. 2018. “Towards Earlier Diagnosis and Treatment of Disorders of the Brain.” Bulletin of the World Health Organization. http://doi.org/10.2471/blt.17.206599.

Draganski, Bogdam, and Arne May. 2008. “Training-Induced Structural Changes in the Adult Human Brain.” Behavioural Brain Research 192 (1): 137–42.

Druckmann, Shaul, Yoav Banitt, Albert A Gidon, Felix Schürmann, Henry Markram, and Idan Segev. 2007. “A Novel Multiple Objective Optimization Framework for Constraining Conductance-Based Neuron Models by Experimental Data.” Frontiers in Neuroscience 1: 1.

Druckmann, Shaul, Thomas K Berger, Sean Hill, Felix Schürmann, Henry Markram, and Idan Segev. 2008. “Evaluating Automated Parameter Constraining Procedures of Neuron Models by Experimental and Surrogate Data.” Biological Cybernetics 99 (4-5): 371.

Druckmann, Shaul, Sean Hill, Felix Schürmann, Henry Markram, and Idan Segev. 2013. “A Hierarchical Structure of Cortical Interneuron Electrical Diversity Revealed by Automated Statistical Analysis.” Cerebral Cortex 23 (12): 2994–3006.

EFEL-Developers. 2018. “EFEL Documentation.” https://efel.readthedocs.io/en/latest/eFeatures.html.

Fortin, Félix-Antoine, François-Michel De Rainville, Marc-André Gardner, Marc Parizeau, and Christian Gagné. 2012. “DEAP: Evolutionary Algorithms Made Easy.” Journal of Machine Learning Research 13: 2171–5.

Friedrich, Péter, Michael Vella, Attila I Gulyás, Tamás F Freund, and Szabolcs Káli. 2014. “A Flexible, Interactive Software Tool for Fitting the Parameters of Neuronal Models.” Frontiers in Neuroinformatics 8: 63.

Garcia, Samuel, Domenico Guarino, Florent Jaillet, Todd R Jennings, Robert Pröpper, Philipp L Rautenberg, Chris Rodgers, et al. 2014. “Neo: An Object Model for Handling Electrophysiology Data in Multiple Formats.” Frontiers in Neuroinformatics 8: 10.


Gerkin, Richard C., Justas Birgiolas, Russell J. Jarvis, Cyrus Omar, and Sharon M. Crook. 2019. “NeuronUnit: A Package for Data-Driven Validation of Neuron Models Using Sciunit.” bioRxiv. https://doi.org/10.1101/665331.

Gerstner, Wulfram, Werner M Kistler, Richard Naud, and Liam Paninski. 2014. Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge University Press.

Gerstner, Wulfram, and Richard Naud. 2009. “How Good Are Neuron Models?” Science 326 (5951): 379–80.

Gleeson, Padraig, Matteo Cantarelli, Boris Marin, Adrian Quintana, Matt Earnshaw, Sadra Sadeh, Eugenio Piasini, et al. 2019. “Open Source Brain: A Collaborative Resource for Visualizing, Analyzing, Simulating, and Developing Standardized Models of Neurons and Circuits.” Neuron 103 (3): 395–411.

Gleeson, Padraig, Sharon Crook, Robert C Cannon, Michael L Hines, Guy O Billings, Matteo Farinella, Thomas M Morse, et al. 2010. “NeuroML: A Language for Describing Data Driven Models of Neurons and Networks with a High Degree of Biological Detail.” PLoS Comput Biol 6 (6): e1000815.

Goldin, Matı́as A, and Gabriel B Mindlin. 2017. “Temperature Manipulation of Neuronal Dynamics in a Forebrain Motor Control Nucleus.” PLoS Computational Biology 13 (8): e1005699.

Gouwens, Nathan W, Jim Berg, David Feng, Staci A Sorensen, Hongkui Zeng, Michael J Hawrylycz, Christof Koch, and Anton Arkhipov. 2018. “Systematic Generation of Biophysically Detailed Models for Diverse Cortical Neuron Types.” Nature Communications 9 (1): 1–13.

Hertäg, Loreen, Joachim Hass, Tatiana Golovko, and Daniel Durstewitz. 2012. “An Approximation to the Adaptive Exponential Integrate-and-Fire Neuron Model Allows Fast and Predictive Fitting to Physiological Data.” Frontiers in Computational Neuroscience 6: 62.

Hill, Sean, and Henry Markram. 2008. “The Blue Brain Project.” In 2008 30th Annual International Conference of the Ieee Engineering in Medicine and Biology Society, clviii–clviii. IEEE.

Hodgkin, Alan L, and Andrew F Huxley. 1952. “A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve.” The Journal of Physiology 117 (4): 500.

Allen Institute. 2015. “Allen Cell Types Database.” https://celltypes.brain-map.org.

Izhikevich, Eugene M. 2003. “Simple Model of Spiking Neurons.” IEEE Transactions on Neural Networks 14 (6): 1569–72.

Jarvis, Russell. 2020a. “JitHub: A Collection of Jit-Enabled Reduced Neuron Models.” GitHub. https://github.com/russelljjarvis/jit_hub.git.

Jolivet, Renaud, Felix Schürmann, Thomas Berger, Richard Naud, Wulfram Gerstner, and Arnd Roth. 2008. “The Quantitative Single-Neuron Modeling Competition.” Biological Cybernetics 99 (December): 417–26. https://doi.org/10.1007/s00422-008-0261-x.

Jones, Edward G. 1999. “Golgi, Cajal and the Neuron Doctrine.” Journal of the History of the Neurosciences 8 (2): 170–78.

Lam, Siu Kwan, Antoine Pitrou, and Stanley Seibert. 2015. “Numba: A Llvm-Based Python Jit Compiler.” In Proceedings of the Second Workshop on the Llvm Compiler Infrastructure in Hpc, 1–6.

Luebke, Jennifer I. 2017. “Pyramidal Neurons Are Not Generalizable Building Blocks of Cortical Networks.” Frontiers in Neuroanatomy 11: 11.

Maechler, Martin. 2013. “Package ‘Diptest’.” R Package Version 0.75–5. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing.

Mainen, Zachary F, and Terrence J Sejnowski. 1995. “Reliability of Spike Timing in Neocortical Neurons.” Science 268 (5216): 1503–6.

Marder, Eve, and Adam L Taylor. 2011. “Multiple Models to Capture the Variability in Biological Neurons and Networks.” Nature Neuroscience 14 (2): 133–38.

Markram, Henry. 2006. “The Blue Brain Project.” Nature Reviews Neuroscience 7 (2): 153–60.

Markram, Henry, Eilif Muller, Srikanth Ramaswamy, Michael W Reimann, Marwan Abdellah, Carlos Aguado Sanchez, Anastasia Ailamaki, et al. 2015. “Reconstruction and Simulation of Neocortical Microcircuitry.” Cell 163 (2): 456–92.

Marr, David, and Tomaso Poggio. 1976. “From Understanding Computation to Understanding Neural Circuitry.”

Meunier, Claude, and Idan Segev. 2002. “Playing the Devil’s Advocate: Is the Hodgkin–Huxley Model Useful?” Trends in Neurosciences 25 (11): 558–63.

Naud, Richard, Thomas Berger, Laurent Badel, Arndt Roth, and Wulfram Gerstner. 2009. “Quantitative Single-Neuron Modeling: Competition 2008.” Front. Neur. Conference Abstract: Neuroinformatics 2009, January. https://doi.org/10.3389/conf.neuro.11.2009.08.106.

Omar, Cyrus, Jonathan Aldrich, and Richard C Gerkin. 2014. “Collaborative Infrastructure for Test-Driven Scientific Model Validation.” In Companion Proceedings of the 36th International Conference on Software Engineering, 524–27.

Quadrato, Giorgia, Tuan Nguyen, Evan Z Macosko, John L Sherwood, Sung Min Yang, Daniel R Berger, Natalie Maria, et al. 2017. “Cell Diversity and Network Dynamics in Photosensitive Human Brain Organoids.” Nature 545 (7652): 48–53.

Rall, Wilfrid. 1962. “Electrophysiology of a Dendritic Neuron Model.” Biophysical Journal 2 (2): 145–67.

Ramaswamy, Srikanth, Jean-Denis Courcol, Marwan Abdellah, Stanislaw R Adaszewski, Nicolas Antille, Selim Arsever, Guy Atenekeng, et al. 2015. “The Neocortical Microcircuit Collaboration Portal: A Resource for Rat Somatosensory Cortex.” Frontiers in Neural Circuits 9: 44.

Rastrigin, Leonard Andreevič. 1974. “Systems of Extremal Control.” Nauka.

“Rastrigin Function.” n.d. Wikipedia. https://en.wikipedia.org/wiki/Rastrigin_function.

Ratté, Stéphanie, Sungho Hong, Erik De Schutter, and Steven A Prescott. 2013. “Impact of Neuronal Properties on Network Coding: Roles of Spike Initiation Dynamics and Robust Synchrony Transfer.” Neuron 78 (5): 758–72.

Rocklin, Matthew. 2015. “Dask: Parallel Computation with Blocked Algorithms and Task Scheduling.” In Proceedings of the 14th Python in Science Conference. 130-136. Citeseer.

Rossant, Cyrille, Dan FM Goodman, Bertrand Fontaine, Jonathan Platkiewicz, Anna K Magnusson, and Romain Brette. 2011. “Fitting Neuron Models to Spike Trains.” Frontiers in Neuroscience 5: 9.

Rossant, Cyrille, Dan FM Goodman, Jonathan Platkiewicz, and Romain Brette. 2010. “Automatic Fitting of Spiking Neuron Models to Electrophysiological Recordings.” Frontiers in Neuroinformatics 4: 2.

Schoppe, Oliver, Nicol S Harper, Ben DB Willmore, Andrew J King, and Jan WH Schnupp. 2016. “Measuring the Performance of Neural Models.” Frontiers in Computational Neuroscience 10: 10.

Shepherd, Gordon M. 2015. Foundations of the Neuron Doctrine. Oxford University Press.

Simon, Herbert A. 1956. “Rational Choice and the Structure of the Environment.” Psychological Review 63 (2): 129.

Spruston, Nelson, Michael Hausser, Gregory J Stuart, and others. 2013. “Information Processing in Dendrites and Spines.” In Fundamental Neuroscience. Academic Press.

Streamlit-Team. 2020. “Streamlit 0.69.2 Documentation.” https://docs.streamlit.io/en/stable/.

Stimberg, Marcel, Romain Brette, and Dan FM Goodman. 2019a. “Brian 2, an Intuitive and Efficient Neural Simulator.” Elife 8: e47314.

———. 2019b. “Brian2modelfitting 0.3 Documentation.” GitHub.

Teeter, Corinne, Ramakrishnan Iyer, Vilas Menon, Nathan Gouwens, David Feng, Jim Berg, Aaron Szafer, et al. 2018. “Generalized Leaky Integrate-and-Fire Models Classify Multiple Neuron Types.” Nature Communications 9 (1): 1–15.

Teeters, Jeffery L, Keith Godfrey, Rob Young, Chinh Dang, Claudia Friedsam, Barry Wark, Hiroki Asari, et al. 2015. “Neurodata Without Borders: Creating a Common Data Format for Neurophysiology.” Neuron 88 (4): 629–34.

Toledo, Maria, Martin Telefont, and Henry Markram. 2016. “Electrophysiological Properties of Neurons in the Rat Somatosensory Cortex at Postnatal Day 13-16.” epfl. microcircuits.epfl.ch/#/article/article_4_eph.

Tripathy, Shreejoy J, Judith Savitskaya, Shawn D Burton, Nathaniel N Urban, and Richard C Gerkin. 2014. “NeuroElectro: A Window to the World’s Neuron Electrophysiology Data.” Frontiers in Neuroinformatics 8: 40.

Van Geit, Werner. 2015. “Electrophys Feature Extraction Library (eFEL).” GitHub. https://github.com/BlueBrain/eFEL.

Van Geit, Werner, Pablo Achard, and Erik De Schutter. 2007. “Neurofitter: A Parameter Tuning Package for a Wide Range of Electrophysiological Neuron Models.” Frontiers in Neuroinformatics 1: 1.

Van Geit, Werner, Erik De Schutter, and Pablo Achard. 2008. “Automated Neuron Model Optimization Techniques: A Review.” Biological Cybernetics 99 (4-5): 241–51.

Van Geit, Werner, Michael Gevaert, Giuseppe Chindemi, Christian Rössert, Jean-Denis Courcol, Eilif Benjamin Muller, Felix Schürmann, Idan Segev, and Henry Markram. 2016. “BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience.” Frontiers in Neuroinformatics 10 (17). https://doi.org/10.3389/fninf.2016.00017.


Vella, Mike, and Padraig Gleeson. 2012. “Neurotune.” GitHub. https://github.com/NeuralEnsemble/neurotune.

Wang, L, S Rich, L Zhang, PL Carlen, SJ Tripathy, TA Valiante, and others. 2019. “Sag Currents Are a Major Contributor to Human Pyramidal Cell Intrinsic Differences Across Cortical Layers and Between Individuals.”

Zhu, J Julius. 2000. “Maturation of Layer 5 Neocortical Pyramidal Neurons: Amplifying Salient Layer 1 and Layer 4 Inputs by Ca2+ Action Potentials in Adult Rat Tuft Dendrites.” The Journal of Physiology 526 (3): 571–87.

Zou, Hui, Trevor Hastie, and Robert Tibshirani. 2006. “Sparse Principal Component Analysis.” Journal of Computational and Graphical Statistics 15 (2): 265–86.